Data-driven journalism, often shortened to "ddj", is a term in use since 2009 to describe a journalistic process based on analyzing and filtering large data sets for the purpose of creating a news story. Its main drivers are newly available resources such as open source software, open access publishing and open data. This approach to journalism builds on older practices, most notably computer-assisted reporting (CAR), a label used mainly in the US for decades. A closely related label is "precision journalism", based on a book by Philip Meyer published in 1972, in which he advocated the use of techniques from the social sciences in researching stories. Data-driven journalism takes an even wider approach: at its core, the process builds on the growing availability of open data that is freely accessible online and analyzed with open source tools.〔Lorenz, Mirko (2010). Data driven journalism: What is there to learn? Edited conference documentation, based on presentations of participants, 24 August 2010, Amsterdam, The Netherlands〕 Data-driven journalism strives to reach new levels of service for the public, helping consumers, managers and politicians to understand patterns and make decisions based on the findings. In this way, data-driven journalism might help to put journalists into a role relevant for society in a new way. As projects such as the MP expenses scandal (2009) and the 2013 release of the "Offshore Leaks" demonstrate, data-driven journalism can also assume an investigative role, occasionally dealing with data that is not openly available.

==Definitions==

According to information architect and multimedia journalist Mirko Lorenz, data-driven journalism is primarily a ''workflow'' that consists of the following elements: ''digging deep'' into data by scraping, cleansing and structuring it, ''filtering'' by mining for specific information, ''visualizing'' and ''making a story''.〔Lorenz, Mirko (2010). Data driven journalism: What is there to learn? Presented at the IJ-7 Innovation Journalism Conference, 7–9 June 2010, Stanford, CA〕 This process can be extended to provide results that cater to individual interests as well as to the broader public.

Data journalism trainer and writer Paul Bradshaw describes the process of data-driven journalism in a similar manner: data must be ''found'', which may require specialized skills like MySQL or Python, then ''interrogated'', for which an understanding of jargon and statistics is necessary, and finally ''visualized'' and ''mashed'' with the aid of open source tools.〔Bradshaw, Paul (1 October 2010). How to be a data journalist. ''The Guardian''〕

A more results-driven definition comes from data reporter and web strategist Henk van Ess (2012):〔van Ess, Henk (2012). Gory details of data driven journalism〕 "''Data-driven journalism enables reporters to tell untold stories, find new angles or complete stories via a workflow of finding, processing and presenting significant amounts of data (in any given form) with or without open source tools.''" Van Ess argues that part of the data-driven workflow leads to products that "''are not in orbit with the laws of good story telling''" because the result emphasizes showing the problem rather than explaining it. "''A good data driven production has different layers. It allows you to find personalized details that are only important for you, by drilling down to relevant details but also enables you to zoom out to get the big picture.''"
In 2013, van Ess offered a shorter definition〔van Ess, Henk (2013). Handboek Datajournalistiek〕 that does not involve visualisation per se: "''Data journalism is journalism based on data that has to be processed first with tools before a relevant story is possible.''"
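The find–interrogate–visualize workflow described above can be illustrated with a short, purely hypothetical sketch in Python, one of the tools Bradshaw mentions. The file name ''mp_expenses.csv'' and its columns ("party", "amount") are invented for illustration and do not refer to any published data set; any other open source toolchain (R, spreadsheets, SQL) could stand in for the pandas and matplotlib libraries used here.

<syntaxhighlight lang="python">
# Minimal sketch of a data-driven journalism workflow (illustrative only).
# Assumes a hypothetical local file "mp_expenses.csv" with "party" and "amount" columns.
import pandas as pd
import matplotlib.pyplot as plt

# Find: load the raw data set.
df = pd.read_csv("mp_expenses.csv")

# Interrogate: clean the data and keep only usable records.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df = df.dropna(subset=["amount"])
totals = df.groupby("party")["amount"].sum().sort_values(ascending=False)

# Visualize: a simple chart that could anchor the resulting story.
totals.plot(kind="bar", title="Claimed expenses by party")
plt.tight_layout()
plt.savefig("expenses_by_party.png")
</syntaxhighlight>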